Cutting out the middleman: measuring nuclear area in histopathology slides without segmentation
The size of nuclei in histological preparations from excised breast tumors is
predictive of patient outcome (large nuclei indicate poor outcome).
Pathologists take into account nuclear size when performing breast cancer
grading. In addition, the mean nuclear area (MNA) has been shown to have
independent prognostic value. The straightforward approach to measuring nuclear
size is by performing nuclei segmentation. We hypothesize that given an image
of a tumor region with known nuclei locations, the area of the individual
nuclei and region statistics such as the MNA can be reliably computed directly
from the image data by employing a machine learning model, without the
intermediate step of nuclei segmentation. Towards this goal, we train a deep
convolutional neural network model that is applied locally at each nucleus
location, and can reliably measure the area of the individual nuclei and the
MNA. Furthermore, we show how such an approach can be extended to perform
combined nuclei detection and measurement, which is reminiscent of
granulometry.
Comment: Conditionally accepted for MICCAI 201
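The measurement pipeline described above can be sketched as follows. This is a hypothetical illustration only: `predict_area` is a trivial stand-in for the trained CNN regressor (here it simply sums stain intensity in the patch), not the paper's model.

```python
import numpy as np

def predict_area(patch):
    """Placeholder for the trained CNN regressor described in the paper.
    Hypothetical stub: scores a patch by its total intensity, just to
    make the pipeline runnable."""
    return float(patch.sum())

def mean_nuclear_area(image, nucleus_centers, patch_size=9):
    """Apply the local regressor at each known nucleus location and
    average the predicted areas to obtain the MNA."""
    half = patch_size // 2
    areas = []
    for (r, c) in nucleus_centers:
        patch = image[r - half:r + half + 1, c - half:c + half + 1]
        areas.append(predict_area(patch))
    return float(np.mean(areas))

# Toy example: two "nuclei" as bright blobs on a dark background.
img = np.zeros((32, 32))
img[8:12, 8:12] = 1.0    # 16-pixel nucleus
img[20:26, 20:26] = 1.0  # 36-pixel nucleus
mna = mean_nuclear_area(img, [(10, 10), (23, 23)])  # -> 26.0
```

The point of the sketch is the structure, not the stub: the model sees only a local patch around each given nucleus location and emits a scalar area, and region statistics such as the MNA are aggregated from those per-nucleus predictions with no segmentation mask in between.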
Two-Stage Convolutional Neural Network for Breast Cancer Histology Image Classification
This paper explores the problem of breast tissue classification of microscopy
images. Based on the predominant cancer type, the goal is to classify images
into four categories of normal, benign, in situ carcinoma, and invasive
carcinoma. Given a suitable training dataset, we utilize deep learning
techniques to address the classification problem. Due to the large size of each
image in the training dataset, we propose a patch-based technique which
consists of two consecutive convolutional neural networks. The first
"patch-wise" network acts as an auto-encoder that extracts the most salient
features of image patches while the second "image-wise" network performs
classification of the whole image. The first network is pre-trained and aimed
at extracting local information while the second network obtains global
information of an input image. We trained the networks using the ICIAR 2018
grand challenge on BreAst Cancer Histology (BACH) dataset. The proposed method
yields 95% accuracy on the validation set, compared to the 77% accuracy
previously reported in the literature. Our code is publicly available at
https://github.com/ImagingLab/ICIAR2018
Comment: 10 pages, 5 figures, ICIAR 2018 conference
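The two-stage design can be sketched as follows. This is a simplified illustration: `patch_wise_features` is a hypothetical stand-in (summary statistics) for the learned auto-encoder features, and the patches are non-overlapping for brevity.

```python
import numpy as np

def extract_patches(image, patch_size):
    """Tile a large image into non-overlapping square patches
    (a simplification; the paper's patches need not be laid out this way)."""
    h, w = image.shape[:2]
    patches = []
    for r in range(0, h - patch_size + 1, patch_size):
        for c in range(0, w - patch_size + 1, patch_size):
            patches.append(image[r:r + patch_size, c:c + patch_size])
    return np.stack(patches)

def patch_wise_features(patch):
    """Stand-in for the 'patch-wise' encoder network: hypothetical
    summary statistics instead of learned salient features."""
    return np.array([patch.mean(), patch.std()])

def image_wise_input(image, patch_size=16):
    """Concatenate per-patch features into the input of the
    'image-wise' classification network."""
    feats = [patch_wise_features(p) for p in extract_patches(image, patch_size)]
    return np.concatenate(feats)

x = image_wise_input(np.random.rand(64, 64))  # 16 patches x 2 features each
```

The design choice this illustrates is the split of responsibilities: the first network only ever sees local patches (keeping memory bounded for very large histology images), while the second network reasons over the concatenated patch representations to produce a whole-image label.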
Visualizing convolutional neural networks to improve decision support for skin lesion classification
Because of their state-of-the-art performance in computer vision, CNNs are
becoming increasingly popular in a variety of fields, including medicine.
However, as neural networks are black box function approximators, it is
difficult, if not impossible, for a medical expert to reason about their
output. This could potentially result in the expert distrusting the network
when he or she does not agree with its output. In such a case, explaining why
the CNN makes a certain decision becomes valuable information. In this paper,
we try to open the black box of the CNN by inspecting and visualizing the
learned feature maps, in the field of dermatology. We show that, to some
extent, CNNs focus on features similar to those used by dermatologists to make
a diagnosis. However, more research is required for fully explaining their
output.
Comment: 8 pages, 6 figures, Workshop on Interpretability of Machine
Intelligence in Medical Image Computing at MICCAI 201
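Feature-map inspection of this kind can be illustrated with a minimal sketch: compute one convolutional feature map and min-max normalize it for display as a heat map over the input. The "learned" edge filter below is hypothetical; in the paper the filters come from the trained CNN.

```python
import numpy as np

def corr2d(image, kernel):
    """Plain 'valid' 2D cross-correlation, i.e. one CNN feature map
    (nonlinearity and bias omitted for simplicity)."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def normalize_for_display(fmap):
    """Min-max scale an activation map to [0, 1] so it can be
    overlaid on the input image as a heat map."""
    lo, hi = fmap.min(), fmap.max()
    return (fmap - lo) / (hi - lo) if hi > lo else np.zeros_like(fmap)

edge_kernel = np.array([[-1.0, 0.0, 1.0]] * 3)  # hypothetical learned filter
img = np.zeros((8, 8))
img[:, 4:] = 1.0  # toy image with a vertical edge
heat = normalize_for_display(corr2d(img, edge_kernel))
```

In this toy case the heat map lights up exactly on the vertical edge, which is the kind of qualitative check the paper performs: does the region the network responds to coincide with structures (e.g. lesion borders) that a dermatologist would also use?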
Roto-Translation Covariant Convolutional Networks for Medical Image Analysis
We propose a framework for rotation and translation covariant deep learning
using SE(2) group convolutions. The group product of the special Euclidean
motion group SE(2) describes how a concatenation of two roto-translations
results in a net roto-translation. We encode this geometric structure into
convolutional neural networks (CNNs) via SE(2) group convolutional layers,
which fit into the standard 2D CNN framework and make it possible to
generically deal with rotated input samples without the need for data
augmentation. We introduce three layers: a lifting layer which lifts a 2D
(vector-valued) image to an SE(2)-image, i.e., 3D (vector-valued) data whose
domain is SE(2); a group convolution layer from and to an SE(2)-image; and a
projection layer from an SE(2)-image to a 2D image. The lifting and group
convolution layers are SE(2)-covariant (the output roto-translates with the
input). The final projection layer, a maximum intensity projection over
rotations, makes the full CNN rotation invariant.
We show with three different problems in histopathology, retinal imaging, and
electron microscopy that with the proposed group CNNs, state-of-the-art
performance can be achieved, without the need for data augmentation by rotation
and with increased performance compared to standard CNNs that do rely on
augmentation.
Comment: 8 pages, 2 figures, 1 table, accepted at MICCAI 201
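A minimal sketch of the lifting and projection idea, under strong simplifying assumptions: only 90-degree rotations (which are exact on a pixel grid, unlike the finer orientation sampling used in the paper) and a single filter. The lifting layer correlates the image with rotated copies of the filter; a maximum over orientations and positions then gives a response that is invariant to rotating the input.

```python
import numpy as np

def corr2d(image, kernel):
    """'Valid' 2D cross-correlation."""
    kh, kw = kernel.shape
    h, w = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(image[r:r + kh, c:c + kw] * kernel)
    return out

def lift(image, kernel, n_orientations=4):
    """Lifting layer sketch: correlate the image with rotated copies
    of one kernel, producing a stack indexed by orientation."""
    return np.stack([corr2d(image, np.rot90(kernel, i))
                     for i in range(n_orientations)])

def invariant_response(image, kernel):
    """Projection sketch: a maximum over orientations (and, here, also
    over positions) yields a rotation-invariant scalar."""
    return float(lift(image, kernel).max())

rng = np.random.default_rng(42)
img = rng.random((12, 12))
k = rng.random((3, 3))
a = invariant_response(img, k)
b = invariant_response(np.rot90(img), k)  # same response for a rotated input
```

Rotating the input only permutes which orientation channel fires, so the maximum over the lifted stack is unchanged; the paper's covariant group convolution layers preserve this structure through the depth of the network before the final projection.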
Tversky loss function for image segmentation using 3D fully convolutional deep networks
Fully convolutional deep neural networks have excellent potential for
fast and accurate image segmentation. One of the main challenges in training
these networks is data imbalance, which is particularly problematic in medical
imaging applications such as lesion segmentation where the number of lesion
voxels is often much lower than the number of non-lesion voxels. Training with
unbalanced data can lead to predictions that are severely biased towards high
precision but low recall (sensitivity), which is undesired especially in
medical applications where false negatives are much less tolerable than false
positives. Several methods have been proposed to deal with this problem
including balanced sampling, two step training, sample re-weighting, and
similarity loss functions. In this paper, we propose a generalized loss
function based on the Tversky index to address the issue of data imbalance and
achieve a much better trade-off between precision and recall in training 3D fully
convolutional deep neural networks. Experimental results in multiple sclerosis
lesion segmentation on magnetic resonance images show improved F2 score, Dice
coefficient, and the area under the precision-recall curve in test data. Based
on these results, we suggest the Tversky loss function as a generalized
framework for effectively training deep neural networks.
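A numpy sketch of the loss on soft predictions. The convention below (alpha weighting false positives, beta false negatives, with beta > alpha to penalize missed lesion voxels and favor recall) should be checked against the paper before reuse; with alpha = beta = 0.5 the Tversky index reduces to the Dice coefficient.

```python
import numpy as np

def tversky_loss(p, g, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss on soft predictions p in [0, 1] and binary ground
    truth g. alpha weights false positives, beta false negatives;
    alpha = beta = 0.5 recovers the Dice loss."""
    p, g = p.ravel(), g.ravel()
    tp = np.sum(p * g)               # soft true positives
    fp = np.sum(p * (1.0 - g))       # soft false positives
    fn = np.sum((1.0 - p) * g)       # soft false negatives
    ti = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return 1.0 - ti

p = np.array([0.8, 0.6, 0.2, 0.1])
g = np.array([1.0, 1.0, 0.0, 0.0])
loss = tversky_loss(p, g)
```

Raising beta above alpha makes each missed lesion voxel cost more than each spurious one, which is how the loss counteracts the high-precision/low-recall bias that class imbalance induces.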
Deep Learning to Analyze RNA-Seq Gene Expression Data
Deep learning models are currently being applied in several areas with great
success. However, their application to the analysis of high-throughput
sequencing data remains a challenge for the research community, because this
family of models is known to work best on big datasets with many samples
available, the opposite of the scenario typically found in biomedical areas.
In this work, a first approximation to the use of deep learning for the
analysis of RNA-Seq gene expression profile data is provided. Three public
cancer-related databases are analyzed using a regularized linear model
(standard LASSO) as the baseline, together with two deep learning models that
differ in the feature selection technique applied prior to the deep neural
network. The results indicate that a straightforward application of the deep
net implementations available in public scientific tools, under the conditions
described in this work, is not enough to outperform simpler models such as
LASSO. Therefore, smarter and more complex approaches that incorporate prior
biological knowledge into the estimation procedure of deep learning models may
be necessary to obtain better predictive performance.
Universidad de Málaga. Campus de Excelencia Internacional Andalucía Tech
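The LASSO baseline named above can be sketched with a minimal numpy proximal-gradient (ISTA) solver. In practice a packaged solver would be used (the abstract does not specify an implementation); the toy "expression matrix" below is synthetic and purely illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the L1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def lasso_ista(X, y, lam=0.1, n_iter=500):
    """Minimal LASSO solver via proximal gradient descent (ISTA):
    minimize (1/2n)||y - Xw||^2 + lam * ||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    step = 1.0 / (np.linalg.norm(X, 2) ** 2 / n)  # 1 / Lipschitz constant
    for _ in range(n_iter):
        grad = X.T @ (X @ w - y) / n              # gradient of the LS term
        w = soft_threshold(w - step * grad, step * lam)
    return w

# Toy "expression matrix": only the first 2 of 20 features matter.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 20))
y = 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.1 * rng.standard_normal(100)
w = lasso_ista(X, y, lam=0.1)
```

The L1 penalty zeroes out the coefficients of uninformative genes, which is exactly why such a simple, sparse baseline is hard to beat when samples are scarce and features are plentiful, as in RNA-Seq cohorts.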
Transfer learning for cell nuclei classification in histopathology images
In histopathological image assessment, there is a high demand for fast and
precise automatic quantification. Such automation could be beneficial for
finding clinical assessment clues that lead to correct diagnoses, for reducing
observer variability, and for increasing objectivity. Owing to its success in
other areas, deep learning could be the key method to obtaining clinical
acceptance. However, the major bottleneck is how to train a deep CNN model
with a limited amount of training data. One question is of critical
importance: is it possible to use transfer learning and fine-tuning in
biomedical image analysis to reduce the effort of manual data labeling and
still obtain a full deep representation for the target task? In this study, we
address this question quantitatively by comparing the performance of transfer
learning and learning from scratch for cell nuclei classification. We evaluate
four different CNN architectures trained on natural images and facial images.
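The transfer-learning setup can be caricatured as follows, under loud assumptions: the frozen "pretrained" body is just a fixed random projection here (purely illustrative, not a trained CNN), the data are synthetic, and only the classification head is trained on the small target set, mimicking feature extraction with a frozen backbone.

```python
import numpy as np

def pretrained_features(x, W_frozen):
    """Stand-in for a frozen pretrained CNN body: a fixed nonlinear
    projection. In real transfer learning these weights come from a
    network trained on e.g. natural or facial images and are frozen."""
    return np.maximum(x @ W_frozen, 0.0)  # ReLU features

def train_head(feats, labels, lr=0.05, n_iter=1000):
    """Train only the classification head (logistic regression) on the
    small labeled target dataset."""
    w = np.zeros(feats.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(feats @ w)))       # sigmoid
        w -= lr * feats.T @ (p - labels) / len(labels)
    return w

rng = np.random.default_rng(1)
W_frozen = rng.standard_normal((10, 16))   # "pretrained" body (fixed)
x = rng.standard_normal((200, 10))         # small synthetic target dataset
labels = (x[:, 0] > 0).astype(float)       # hypothetical nuclei labels
feats = pretrained_features(x, W_frozen)
w = train_head(feats, labels)
acc = np.mean(((feats @ w) > 0) == (labels == 1))
```

Only the head's parameters are updated, which is the point of the study's question: when labels are scarce, reusing a representation learned elsewhere and fitting a small number of task-specific parameters can be preferable to learning the full representation from scratch.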